feat: non-blocking pre bundling of dependencies #6758
Conversation
Writing down some ideas for further improvement related to pre-bundling:

ES Interop

We need to await the processing of dependencies at import analysis to add the ES interop code when rewriting imports. It would be great if we could only await at loading time, and let import analysis finish so the browser can keep processing and requesting other files. If there is a way to add the ES interop by patching the entry point generated by esbuild or by creating a virtual proxy to it, we should explore it.

Relative paths in `_metadata.json`

We currently use absolute paths in `_metadata.json`, which means that if the user moves or copies a Vite app folder, the cache becomes invalid. I think users may be hitting hard-to-report bugs because of this. We could make the paths relative to the root to avoid this issue. @JessicaSachs proposed that users may want to commit their cache for some use cases, which may be interesting for small playgrounds that are used a lot, to speed things up. Relative paths are also needed for that use case.
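To illustrate the relative-paths idea, here is a minimal sketch. The helper names are hypothetical (not Vite internals), and `path.posix` is used so the example behaves the same on every platform:

```typescript
import path from 'node:path'

// Hypothetical helpers: store dependency file paths relative to the project
// root when writing _metadata.json, and resolve them back when reading.
function toRelativePath(root: string, file: string): string {
  return path.posix.relative(root, file)
}

function fromRelativePath(root: string, stored: string): string {
  return path.posix.resolve(root, stored)
}

const oldRoot = '/home/alice/my-app'
const file = '/home/alice/my-app/node_modules/.vite/deps/vue.js'
const stored = toRelativePath(oldRoot, file)
console.log(stored) // node_modules/.vite/deps/vue.js

// After the project folder is moved, the relative entry still resolves:
const newRoot = '/home/alice/projects/my-app'
console.log(fromRelativePath(newRoot, stored))
```

With entries stored this way, moving or copying the app folder (or committing the cache, as suggested above) would no longer invalidate it.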
The flow LGTM! I've tested this branch on a fairly big production app and pre-bundling works as usual. Have a few nitpicks below, but otherwise it's an approval from me.
Awesome 🚀
Tested on a few îles sites and it's working nicely! For example, when …
Thanks a lot for testing the PR @ElMassimo! We discussed the PR today with @bluwy and @dominikg; some points we gathered about it:
I pushed a fix for source maps that @vursen tested successfully in their app. They still saw some source map warnings, so something is still missing there.
@vursen confirmed that a637809 worked as expected, properly avoiding concurrent runs of `optimizeDeps`. I just pushed a new commit, refactor: createOptimizeDepsRun, remove new option 44eca4e, to continue the work from 3f9ec98 and clean up the PR after the latest changes:
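The concurrency guard mentioned above (avoiding overlapping `optimizeDeps` runs) can be sketched roughly like this. Names are illustrative, not Vite's actual code: if a run is already in flight, we queue a single rerun instead of starting another one.

```typescript
// Illustrative sketch: avoid concurrent optimizer runs by queueing at most
// one rerun while a run is in flight.
let currentRun: Promise<void> | null = null
let rerunQueued = false
let runsStarted = 0

async function runOptimizer(): Promise<void> {
  runsStarted++ // stand-in for the actual pre-bundling work
}

function requestOptimizerRun(): Promise<void> {
  if (currentRun) {
    // A run is already in flight: remember to rerun once, don't start another.
    rerunQueued = true
    return currentRun
  }
  currentRun = runOptimizer().finally(() => {
    currentRun = null
    if (rerunQueued) {
      rerunQueued = false
      requestOptimizerRun()
    }
  })
  return currentRun
}

// Three quick requests collapse into at most two actual runs:
requestOptimizerRun()
requestOptimizerRun()
requestOptimizerRun()
```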
Confirmed that d587774 fixes the issue where 504 errors were sometimes returned for some dependencies because of an outdated …
I tested this out in my project which uses storybook-builder-vite. Previously, if I did not carefully include all of my dependencies in
And in the browser, I would see logs:
Now, with this PR, I see the following in the terminal:
And in the browser, just:
😍 I also confirmed that I had no problems when switching to this PR and running storybook without clearing my previous cache. Awesome work here! This has definitely been a source of confusion and problems for our users in the past, so I'm very glad to see that this is working much better now.
Closes #6508, #6543
Related #6654
Description edited to reflect latest state of the PR
Description

This PR refactors Vite's pre-bundling to make it non-blocking (including deps added via `optimizeDeps.include`). We only need to wait for the scan phase now, and pre-bundling is done in the background.

Related PRs
Allowing requests to flow through the pipeline exposed race conditions related to in-flight requests when invalidating a module. These issues were present before, but it was harder to hit them. The race conditions were fixed by:
Avoid corrupt cache after moving a Vite project folder:
Fixes:
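One way to picture the in-flight race described above: a transform that started before a module was invalidated must not overwrite fresher state. The sketch below uses illustrative names, not Vite's actual API — the idea is simply to compare the run's start time against the module's last invalidation timestamp before committing.

```typescript
// Illustrative guard against stale in-flight transform results.
interface ModuleState {
  lastInvalidationTimestamp: number
  result?: string
}

function commitTransformResult(
  mod: ModuleState,
  startedAt: number,
  result: string,
): boolean {
  // Drop the result if the module was invalidated after this run began.
  if (startedAt < mod.lastInvalidationTimestamp) return false
  mod.result = result
  return true
}

const mod: ModuleState = { lastInvalidationTimestamp: 0 }
const startedAt = Date.now()
mod.lastInvalidationTimestamp = startedAt + 1 // invalidated mid-flight
const committed = commitTransformResult(mod, startedAt, 'stale code')
console.log(committed) // false — the stale result is discarded
```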
New cache structure
We discovered edge cases caused by emptying the `cacheDir` while pre-bundling. fix: sync swap of new processed deps in disk 7f51389 reworks the optimizer to pre-bundle into a temporary folder, swapping the result in synchronously once it is done. This should resolve other issues reported before this PR, where a source map was requested but had been deleted as part of a reprocessing.

The commit also fixed an unrelated bug: we were deleting the whole `cacheDir`, and with it the https certificate. Now the optimized dependencies will be at `cacheDir/deps`, so we can safely delete that folder without touching other caches. For compat, the `cacheDir` is emptied if the old structure is found. While processing, the deps live at `cacheDir/processing`. This should also ensure that the cache doesn't end up in a corrupted state if there is an error while processing.

Changes to the Transform Middleware
We no longer block requests while processing; that was done in this middleware before. We now block as deep inside the pipeline as possible (in the new `optimizedDepsPlugin`).

Handling of source maps for optimized deps has also been added. To avoid warnings in the browser, we return empty source maps for outdated requests. We confirmed in a large app that this works correctly.

A 1-second timeout was also removed; no timeout is applied when waiting for a file to load, in case re-processing takes more time.
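The empty source map mentioned above is essentially a valid v3 map with no mappings, so the browser has something well-formed to consume for an outdated request. A minimal sketch (the function name is illustrative):

```typescript
// Illustrative: build an empty Source Map v3 payload for an outdated request
// so the browser doesn't warn about a missing or broken map.
function emptySourceMapFor(url: string): string {
  return JSON.stringify({
    version: 3,
    file: url,
    sources: [],
    names: [],
    mappings: '',
  })
}

const map = JSON.parse(
  emptySourceMapFor('/node_modules/.vite/deps/vue.js.map'),
)
console.log(map.version, map.mappings === '') // 3 true
```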
New `optimizedDepsPlugin`

A new serve-only plugin is introduced to load optimized files after waiting for their `processing` promise to be fulfilled. Files from the cache were loaded through the regular fallback before, but we now block at loading time.

Short-circuit the resolve id for optimized deps URLs
This wasn't needed before in the resolve plugin, but we now need to resolve these ids when the files for the processed deps don't yet exist (checks that run later in the resolve plugin would otherwise fail for them).
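The short-circuit amounts to: if an id already points into the deps cache directory, resolve it immediately without any file-system checks, because the file may not exist on disk yet while esbuild is still running. A sketch under that assumption (the function name is hypothetical; the `cacheDir/deps` layout follows this PR):

```typescript
// Illustrative: resolve optimized-dep ids without touching the file system.
function tryOptimizedDepResolve(id: string, cacheDir: string): string | null {
  const depsDir = `${cacheDir}/deps/`
  // The file may not exist yet while esbuild is still pre-bundling,
  // so we must not stat() here — just accept ids inside depsDir.
  return id.startsWith(depsDir) ? id : null
}

const cacheDir = '/app/node_modules/.vite'
const hit = tryOptimizedDepResolve(
  '/app/node_modules/.vite/deps/vue.js',
  cacheDir,
)
const miss = tryOptimizedDepResolve('/app/src/main.ts', cacheDir)
console.log(hit, miss) // the id itself, then null
```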
Refactor `tryNodeResolve`

`server._registerMissingImport` now returns the resolved id. Before, it only registered the dep as missing and the regular `node_modules` id was returned. The path where the optimized dep will live in the `cacheDir` is known, we pre-create a `browserHash` for it, and we no longer add the `&es-interop` suffix, so this is still a sync call.

No longer provided: `optimizeDeps.holdBackServerStart`
We decided that we aren't going to expose a new option, and wait until there is a bug report or feat request to add this.
There is a new config option `optimizeDeps.holdBackServerStart`: hold back server start when doing the initial pre-bundling. Default is `false`. Set to `true` for testing purposes, or in situations where the previous strategy is required. I don't know if this is really needed, but it feels safer to let users opt in to the old behavior if there are cases where initial pre-bundling is so long that the browser could time out the requests.

`OptimizedDepInfo.browserHash`
We now have a `browserHash` per dependency info instead of the global one in `optimizeDepsMetadata`. This is needed because missing deps are assigned a `browserHash` when discovered. If files are stable after re-processing, these hashes are kept, avoiding a full reload.

`OptimizedDepInfo.fileHash`
Each optimized entry point now gets a `fileHash` that takes its imports into account, letting us know whether an optimized bundle has changed after re-processing.

`OptimizedDepInfo.processing`
Each dependency info gets its own `processing` promise, so we can block the load of a file until it is available without blocking other requests.
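A rough sketch of the per-dependency promise idea (the shape of `OptimizedDepInfo` here is simplified and the helper names are illustrative): loading one file awaits only that dep's promise, not the whole optimizer run.

```typescript
// Illustrative per-dependency processing promise.
interface DepInfo {
  file: string
  processing: Promise<void>
  resolve: () => void
}

function createDepInfo(file: string): DepInfo {
  let resolve!: () => void
  const processing = new Promise<void>((r) => (resolve = r))
  return { file, processing, resolve }
}

async function loadOptimizedDep(dep: DepInfo): Promise<string> {
  await dep.processing // block only this file's load
  return `contents of ${dep.file}`
}

const dep = createDepInfo('vue.js')
const loading = loadOptimizedDep(dep)
dep.resolve() // esbuild finished this dep
loading.then((c) => console.log(c)) // contents of vue.js
```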
The `_isRunningOptimizer` and `_pendingReload` global server states are no longer needed.

`DepOptimizationMetadata.discovered`
Discovered dependencies are now exposed to the pipeline so the new `optimizedDepsPlugin` can await until they are processed (see also `currentMissing` -> `metadata.discovered`).

Removal of the `&es-interop` suffix

We don't know when resolving node dependencies whether ES interop is needed, as esbuild may still be processing them. Instead of using a suffix in the URL, we now find out whether ES interop is needed by looking at the `OptimizedDepInfo` in the `importAnalysisPlugin`, where it is needed (after awaiting the processing promise for it).

Note:
`hash` is used instead of `browserHash` for files excluded from optimization. This is unrelated to the PR, but as we need to re-test quite a bit to be able to merge this one, I think we could also check it in. `hash` only depends on the lock file, but that should be enough here AFAICS.
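As a rough illustration of a hash that depends only on the lock file: hash the lockfile contents, so the value changes only when dependencies change. This is a sketch, not Vite's exact implementation (which also folds in relevant config):

```typescript
import { createHash } from 'node:crypto'

// Illustrative: derive a short cache hash from the lockfile contents alone.
function getLockfileHash(lockfileContent: string): string {
  return createHash('sha256').update(lockfileContent).digest('hex').slice(0, 8)
}

const a = getLockfileHash('lockfileVersion: 5.3\n')
const b = getLockfileHash('lockfileVersion: 5.3\n')
const c = getLockfileHash('lockfileVersion: 5.4\n')
console.log(a === b, a === c) // true false
```

Because the hash is stable across runs for an unchanged lockfile, files excluded from optimization keep the same URL until dependencies actually change.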